Long Run Equilibrium in Discounted Stochastic Fictitious Play

Authors

  • Noah Williams
Abstract

In this paper we develop methods to analyze the long run behavior of models with multiple equilibria, and we apply them to a well-known model of learning in games. Our methods apply to discrete-time continuous-state stochastic models, and as a particular application we study a model of stochastic fictitious play. We focus on a variant of this model in which agents’ payoffs are subject to random shocks and they discount past observations exponentially. We analyze the behavior of agents’ beliefs as the discount rate on past information becomes small but the payoff shock variance remains fixed. We show that agents tend to be drawn toward an equilibrium, but occasionally the stochastic shocks lead agents to endogenously shift between equilibria. We then calculate the invariant distribution of players’ beliefs and use it to determine the most likely outcome observed in the long run. Our application shows that by making some slight changes to a standard learning model, we can derive an equilibrium selection criterion similar to that of stochastic evolutionary models, but with some important differences.
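As a rough illustration of the model described in the abstract, the following Python sketch simulates discounted stochastic fictitious play in a symmetric 2x2 coordination game: each player best responds to shock-perturbed expected payoffs and updates beliefs with a constant gain, so past observations are discounted exponentially. The payoff matrix, the gain eps, the shock scale sigma, and the horizon T are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Hypothetical symmetric 2x2 coordination game: (A,A) and (B,B) are strict Nash equilibria.
PAYOFF = np.array([[2.0, 0.0],
                   [0.0, 1.0]])   # row player's payoffs; the game is symmetric

eps = 0.02      # constant gain: weight on new observations (exponential discounting of the past)
sigma = 0.5     # standard deviation of the i.i.d. payoff shocks
T = 200_000     # simulation length
rng = np.random.default_rng(0)

belief = np.array([0.5, 0.5])   # belief[i]: probability player i assigns to the opponent playing action 0
path = np.empty(T)

for t in range(T):
    actions = np.empty(2, dtype=int)
    for i in range(2):
        mix = np.array([belief[i], 1.0 - belief[i]])
        # Expected payoff of each own action against the believed opponent mixture,
        # perturbed by an independent payoff shock.
        u = PAYOFF @ mix + sigma * rng.standard_normal(2)
        actions[i] = int(np.argmax(u))
    # Constant-gain belief update: the newest observation gets weight eps,
    # so older observations are discounted geometrically.
    belief[0] = (1.0 - eps) * belief[0] + eps * (actions[1] == 0)
    belief[1] = (1.0 - eps) * belief[1] + eps * (actions[0] == 0)
    path[t] = belief[0]

# The long-run histogram of beliefs approximates the invariant distribution:
# mass concentrates near the two equilibria, with occasional switches between them.
hist, edges = np.histogram(path[T // 10:], bins=20, range=(0.0, 1.0), density=True)
print(np.round(hist, 2))
```

With a small gain, simulated beliefs spend long stretches near one of the two pure equilibria and only rarely switch, which is the kind of endogenous shifting between equilibria that the paper analyzes as the discount rate on past information becomes small.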


Similar Articles

Stability and Long Run Equilibrium in Stochastic Fictitious Play

In this paper we develop methods to analyze the long run behavior of models with multiple stable equilibria, and we apply them to a well-known model of learning in games. Our methods apply to discrete-time continuous-state stochastic models, and as a particular application we study a model of stochastic fictitious play. We focus on a variant of this model in which agents’ payoffs are subject...


Evolution in games with randomly disturbed payoffs

We consider a simple model of stochastic evolution in population games. In our model, each agent occasionally receives opportunities to update his choice of strategy. When such an opportunity arises, the agent selects a strategy that is currently optimal, but only after his payoffs have been randomly perturbed. We prove that the resulting evolutionary process converges to approximate Nash equil...


On the Nonconvergence of Fictitious Play in Coordination Games

It is shown by example that learning rules of the fictitious play type fail to converge in certain kinds of coordination games. Variants of fictitious play in which past actions are eventually forgotten and that incorporate small stochastic perturbations are better behaved for this class of games: over the long run, players coordinate with probability one. Journal of Economic Literature Classif...


Stochastic fictitious play with continuous action sets

Continuous action space games form a natural extension to normal form games with finite action sets. However, whilst learning dynamics in normal form games are now well studied, only recently have their continuous action space counterparts been examined. We extend stochastic fictitious play to the continuous action space framework. In normal form games the limiting behaviour of ...


Fictitious play in stochastic games

In this paper we examine an extension of the fictitious play process for bimatrix games to stochastic games. We show that the fictitious play process does not necessarily converge, not even in the 2 × 2 × 2 case with a unique equilibrium in stationary strategies. Here 2 × 2 × 2 stands for 2 players, 2 states, 2 actions for each player in each state.



Publication date: 2014